6 results for MULTIFACTOR-DIMENSIONALITY REDUCTION

in CentAUR: Central Archive, University of Reading - UK


Relevance: 100.00%

Abstract:

It is known that the empirical orthogonal function method is unable to detect possible nonlinear structure in climate data. Here, isometric feature mapping (Isomap), a tool for nonlinear dimensionality reduction, is applied to 1958–2001 ERA-40 sea-level pressure anomalies to study the nonlinearity of Asian summer monsoon intraseasonal variability. Using the leading two Isomap time series, the probability density function is shown to be bimodal. A bivariate Gaussian mixture model is then fitted in this two-dimensional space to identify the monsoon phases, the two regimes obtained representing enhanced and suppressed phases, respectively. The relationship with the large-scale seasonal mean monsoon indicates that the frequency of regime occurrence is significantly perturbed, in agreement with conceptual ideas, with a preference for enhanced convection on intraseasonal time scales during large-scale strong monsoons. Trend analysis suggests a shift in the concentration of monsoon convection, with less emphasis on South Asia and more on the East China Sea.
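
A minimal sketch of the kind of pipeline this abstract describes, using scikit-learn: Isomap reduces a high-dimensional anomaly field to its leading two coordinates, and a two-component Gaussian mixture is then fitted in that plane to separate regimes. The array sizes, neighbour count and the random stand-in for the ERA-40 sea-level pressure anomalies are illustrative assumptions, not the study's actual data or settings.

```python
# Sketch: nonlinear dimensionality reduction with Isomap followed by a
# two-component bivariate Gaussian mixture to identify regimes.
import numpy as np
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Placeholder field standing in for sea-level pressure anomalies:
# 500 time steps x 200 grid points.
slp_anomalies = rng.standard_normal((500, 200))

# Map the high-dimensional anomalies onto the leading two Isomap coordinates.
isomap = Isomap(n_neighbors=20, n_components=2)
z = isomap.fit_transform(slp_anomalies)

# Fit a two-component Gaussian mixture in the Isomap plane and assign each
# time step to a regime (e.g. enhanced vs. suppressed convection).
gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
regime = gmm.fit_predict(z)

print("Regime occupation counts:", np.bincount(regime))
```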

Relevance: 100.00%

Abstract:

This paper is concerned with tensor clustering with the assistance of dimensionality reduction approaches. A class of formulations for tensor clustering is introduced based on tensor Tucker decomposition models. In this formulation, an extra tensor mode is formed by stacking a collection of tensors of the same dimensions, and this mode is then used to assist a Tucker decomposition in order to achieve data dimensionality reduction. We design two types of clustering models for the tensors, a PCA Tensor Clustering model and a Non-negative Tensor Clustering model, by utilizing different regularizations. The tensor clustering problem can thus be solved by an optimization method based on an alternating coordinate scheme. Interestingly, our experiments show that the proposed models yield comparable or even better performance than recent clustering algorithms based on matrix factorization.
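
A hedged sketch of the general idea only: stack same-sized tensors along an extra "sample" mode, apply a Tucker decomposition to reduce dimensionality, and cluster the factor matrix of the sample mode. It relies on the third-party tensorly package and plain k-means; the paper's specific PCA and non-negative tensor clustering models, with their particular regularizations, are not reproduced here, and all sizes and ranks are made up for illustration.

```python
# Sketch: Tucker-based dimensionality reduction followed by clustering of the
# sample-mode factor matrix.
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 60 sample tensors, each 10 x 10 x 10, stacked into a 4-way tensor.
samples = rng.standard_normal((60, 10, 10, 10))
X = tl.tensor(samples)

# Tucker decomposition: compress every mode, including the sample mode, so the
# mode-0 factor becomes a low-dimensional embedding of each sample tensor.
core, factors = tucker(X, rank=[3, 3, 3, 3])

# Cluster the per-sample representation (a 60 x 3 matrix here).
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    tl.to_numpy(factors[0])
)
print(labels[:10])
```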

Relevance: 100.00%

Abstract:

Learning a low-dimensional manifold from highly nonlinear, high-dimensional data has become increasingly important for discovering intrinsic representations that can be used for data visualization and preprocessing. The autoencoder is a powerful dimensionality reduction technique based on minimizing reconstruction error, and it has regained popularity because it has been used efficiently for greedy pretraining of deep neural networks. Compared to neural networks (NNs), Gaussian processes (GPs) have shown advantages in model inference, optimization and performance. GPs have been applied successfully in nonlinear dimensionality reduction (DR) algorithms, such as the Gaussian Process Latent Variable Model (GPLVM). In this paper we propose the Gaussian Processes Autoencoder Model (GPAM) for dimensionality reduction, extending the classic NN-based autoencoder to a GP-based autoencoder. Interestingly, the model can also be viewed as a back-constrained GPLVM (BC-GPLVM) in which the back-constraint smooth function is represented by a GP. Experiments verify the performance of the newly proposed model.
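
A toy illustration of the encoder/decoder structure alluded to above, built with scikit-learn GP regressors: one GP plays the role of the back-constraint mapping (data to latent) and one the role of the GPLVM mapping (latent to data), with the latent coordinates initialised by PCA. This is only a conceptual sketch under those assumptions; the paper's GPAM is trained by optimising a GP marginal likelihood, which this crude encode/decode alternation does not do.

```python
# Sketch: GP "encoder" and "decoder" around a PCA-initialised latent space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
Y = rng.standard_normal((200, 10))           # observed data: 200 points in 10 dimensions
Z = PCA(n_components=2).fit_transform(Y)     # initial 2-D latent coordinates

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-2)
for _ in range(3):
    # Encoder GP: data -> latent (the role of the back-constraint smooth mapping).
    encoder = GaussianProcessRegressor(kernel=kernel).fit(Y, Z)
    Z = encoder.predict(Y)                   # re-encode to refresh the latent coordinates

# Decoder GP: latent -> data (the role of the GPLVM mapping), used for reconstruction.
decoder = GaussianProcessRegressor(kernel=kernel).fit(Z, Y)
reconstruction = decoder.predict(Z)
print("Mean squared reconstruction error:", np.mean((Y - reconstruction) ** 2))
```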

Relevance: 100.00%

Abstract:

BACKGROUND: Low vitamin D status has been shown to be a risk factor for several metabolic traits such as obesity, diabetes and cardiovascular disease. The biological actions of 1,25-dihydroxyvitamin D are mediated through the vitamin D receptor (VDR), which heterodimerizes with retinoid X receptor gamma (RXRG). Hence, we examined the potential interactions between the tagging polymorphisms in the VDR (22 tag SNPs) and RXRG (23 tag SNPs) genes on metabolic outcomes such as body mass index, waist circumference, waist-hip ratio (WHR), high- and low-density lipoprotein (LDL) cholesterol, serum triglycerides, systolic and diastolic blood pressure and glycated haemoglobin in the 1958 British Birth Cohort (1958BC, up to n = 5,231). We used the Multifactor-Dimensionality Reduction (MDR) program as a non-parametric test to examine potential interactions between the VDR and RXRG gene polymorphisms in the 1958BC. We used data from the Northern Finland Birth Cohort 1966 (NFBC66, up to n = 5,316) and Twins UK (up to n = 3,943) to replicate our initial findings from the 1958BC. RESULTS: After Bonferroni correction, the joint-likelihood ratio test suggested interactions on serum triglycerides (4 SNP-SNP pairs), LDL cholesterol (2 SNP-SNP pairs) and WHR (1 SNP-SNP pair) in the 1958BC. MDR permutation model testing showed one two-way and one three-way interaction to be statistically significant for serum triglycerides in the 1958BC. In a meta-analysis of results from the two replication cohorts (NFBC66 and Twins UK, total n = 8,183), none of the interactions remained significant after correction for multiple testing (P for interaction > 0.17). CONCLUSIONS: Our results do not provide strong evidence for interactions between allelic variations in the VDR and RXRG genes on metabolic outcomes; however, further replication studies in large samples are needed to confirm our findings.
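
For readers unfamiliar with MDR, a minimal from-scratch sketch of its core step for a single SNP-SNP pair with a binary outcome: pool the nine genotype combinations into "high-risk" and "low-risk" cells by comparing each cell's case/control ratio with the overall ratio, collapse the pair into a one-dimensional attribute, and score it by classification accuracy. The genotype and phenotype data are simulated placeholders; the published analysis used the MDR software with cross-validation and permutation testing, which this toy version does not reproduce.

```python
# Sketch: the cell-pooling step of multifactor-dimensionality reduction (MDR)
# for one pair of SNPs and a binary case/control outcome.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
snp1 = rng.integers(0, 3, size=n)            # genotypes coded 0/1/2
snp2 = rng.integers(0, 3, size=n)
case = rng.integers(0, 2, size=n)            # binary outcome (1 = case)

overall_ratio = case.sum() / (n - case.sum())

# Label each of the 3 x 3 genotype cells as high or low risk.
high_risk = np.zeros((3, 3), dtype=bool)
for g1 in range(3):
    for g2 in range(3):
        in_cell = (snp1 == g1) & (snp2 == g2)
        cases = case[in_cell].sum()
        controls = in_cell.sum() - cases
        ratio = cases / controls if controls > 0 else np.inf
        high_risk[g1, g2] = ratio > overall_ratio

# Collapse the pair into a single binary attribute and measure its accuracy.
predicted = high_risk[snp1, snp2].astype(int)
accuracy = (predicted == case).mean()
print(f"Classification accuracy for this SNP pair: {accuracy:.3f}")
```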

Relevance: 80.00%

Abstract:

Empirical orthogonal function (EOF) analysis is a powerful tool for data compression and dimensionality reduction used broadly in meteorology and oceanography. Often in the literature, EOF modes are interpreted individually, independent of other modes. In fact, it can be shown that no such attribution can generally be made. This review demonstrates that in general individual EOF modes (i) will not correspond to individual dynamical modes, (ii) will not correspond to individual kinematic degrees of freedom, (iii) will not be statistically independent of other EOF modes, and (iv) will be strongly influenced by the nonlocal requirement that modes maximize variance over the entire domain. The goal of this review is not to argue against the use of EOF analysis in meteorology and oceanography; rather, it is to demonstrate the care that must be taken in the interpretation of individual modes in order to distinguish the medium from the message.
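
As a brief reminder of how EOF modes are obtained in practice: remove the time mean at each grid point and take the SVD of the anomaly matrix, so the EOFs are the right singular vectors and the variance explained by each mode follows from the singular values. The field below is a synthetic placeholder; the review's caveats about interpreting individual modes apply however the modes are computed.

```python
# Sketch: EOF analysis of a time x space field via the SVD of its anomalies.
import numpy as np

rng = np.random.default_rng(0)
field = rng.standard_normal((300, 500))        # e.g. 300 months x 500 grid points

anomalies = field - field.mean(axis=0)          # remove the time mean at each point
pcs_u, singular_values, eofs = np.linalg.svd(anomalies, full_matrices=False)

# Rows of `eofs` are the spatial EOF patterns; scaled columns of U are the
# principal-component time series of each mode.
explained_variance = singular_values**2 / np.sum(singular_values**2)
principal_components = pcs_u * singular_values
print("Variance fraction of the leading three EOFs:", explained_variance[:3].round(3))
```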

Relevance: 80.00%

Abstract:

Locality to other nodes on a peer-to-peer overlay network can be established by means of a set of landmarks shared among the participating nodes. Each node independently collects a set of latency measures to the landmark nodes, which are used as a multi-dimensional feature vector. Each peer node uses the feature vector to generate a unique scalar index that is correlated with its topological locality. A popular dimensionality reduction technique is the space-filling Hilbert curve, as it possesses good locality-preserving properties. However, there is little comparison between the Hilbert curve and other techniques for dimensionality reduction. This work carries out a quantitative analysis of their properties. Linear and non-linear techniques for scaling the landmark vectors to a single dimension are investigated. The Hilbert curve, Sammon's mapping and Principal Component Analysis have been used to generate a one-dimensional space with locality-preserving properties. This work provides empirical evidence to support the use of the Hilbert curve in the context of locality preservation when generating peer identifiers by means of landmark vector analysis. A comparative analysis is carried out with an artificial two-dimensional network model and with a realistic network topology model with the typical power-law distribution of node connectivity in the Internet. Nearest-neighbour analysis confirms the Hilbert curve to be very effective in both artificial and realistic network topologies. Nevertheless, the results in the realistic network model show that there is scope for improvement, and better techniques for preserving locality information are required.
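
A hedged sketch of the mapping discussed above: each peer's vector of latencies to a set of landmarks is reduced to a single scalar identifier, here both by a Hilbert curve index (using the third-party hilbertcurve package, whose v2 API is assumed) and, for comparison, by a one-dimensional PCA projection. The landmark count, quantisation depth and latency values are illustrative; the paper's Sammon's mapping comparison and network models are not reproduced.

```python
# Sketch: landmark latency vectors -> 1-D peer identifiers.
import numpy as np
from hilbertcurve.hilbertcurve import HilbertCurve   # third-party package (API assumed)
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_peers, n_landmarks, bits = 100, 4, 8
latencies = rng.uniform(1.0, 200.0, size=(n_peers, n_landmarks))   # ms to each landmark

# Hilbert index: quantise each latency onto an integer grid, then map the
# 4-D grid point to its position along the space-filling curve.
grid = np.floor((latencies - latencies.min()) /
                (latencies.max() - latencies.min()) * (2**bits - 1)).astype(int)
curve = HilbertCurve(p=bits, n=n_landmarks)
hilbert_ids = np.array([curve.distance_from_point(point) for point in grid.tolist()])

# Linear alternative: project the landmark vectors onto their first principal component.
pca_ids = PCA(n_components=1).fit_transform(latencies).ravel()

print(hilbert_ids[:5], pca_ids[:5])
```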